
    CONFIGR: A Vision-Based Model for Long-Range Figure Completion

    Full text link
    CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling-in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.
    Air Force Office of Scientific Research (F49620-01-1-0423); National Geospatial-Intelligence Agency (NMA 201-01-1-0216); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
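
    As a concrete illustration of the self-scaling completion described above, the sketch below bridges gaps of any length along the rows of a binary image. It is a hypothetical one-dimensional caricature written for this summary, not the published CONFIGR algorithm, which additionally completes ground to block spurious fills and computes lobes at subpixel scale; the function name and test image are invented.

```python
import numpy as np

def fill_row_gaps(figure):
    """Toy self-scaling filling-in: within each row of a binary figure,
    complete every gap lying strictly between two figure pixels,
    whatever the gap length. Illustration only, not CONFIGR itself."""
    out = figure.astype(bool).copy()
    for row in out:
        cols = np.flatnonzero(row)            # columns already marked as figure
        if cols.size >= 2:
            row[cols[0]:cols[-1] + 1] = True  # bridge all interior gaps
    return out

# A dashed horizontal line is completed across gaps of any length.
dashed = np.zeros((3, 12), dtype=bool)
dashed[1, [0, 1, 5, 10, 11]] = True
print(fill_row_gaps(dashed).astype(int))
```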

    Impact of spallation and internal radiation on fibrous ablative materials

    Get PDF
    Space vehicles are equipped with Thermal Protection Systems (TPS) that encounter high heat rates and protect the payload while entering a planetary atmosphere. For most missions of interest to NASA, ablative materials are used as TPS. These materials undergo several mass and energy transfer mechanisms to absorb intense heat. The size and construction of the TPS are based on the composition of the planetary atmosphere and the impact of various ablative mechanisms on the flow field and the material. Therefore, it is essential to quantify the rates of different ablative phenomena to model TPS accurately. In this work, the impact of two ablative mechanisms is studied. The first is spallation, a phenomenon in which the TPS material ejects particles when exposed to atmospheric entry conditions. It is typically modeled as an added percentage of safety based on the overall ablation rate. A data-driven adaptive technique was used to numerically reconstruct particle trajectories from spallation experiments at the NASA HyMETS facility and so evaluate the effects of spallation on ablative materials. Several numerical models were developed and integrated into a Lagrangian particle trajectory code to ensure an accurate reconstruction: a blended drag coefficient model to compute accurate particle dynamics, a non-sphericity model to account for irregular particle shapes, and a backtracking model to simulate trajectories reversed in time from the first experimental point back to the ejection location on the sample. The reconstructed results were analyzed statistically to characterize the size and ejection parameters of the spalled particles, yielding estimates of the mass loss due to spallation and probable causes of particle ejection. In addition, the trajectory code was coupled to a hypersonic aerothermodynamic code to evaluate the effect of these hot, chemically reactive spalled particles on the flow field. This comprehensive study of spallation provides more insight into the phenomenon and tools to quantify its impact. The second mechanism studied in this work is internal radiation. Recent laser heating experiments have shown that spectral radiative heat fluxes penetrate ablative materials, with a penetration distance inversely proportional to the absorption coefficient of the material at the corresponding wavelength. Since the shock layer produced around the material at atmospheric entry conditions can be represented as a group of lasers of different wavelengths, radiation penetration may be significant, especially in radiation-dominated entries. A radiative transfer equation is fully coupled to the in-house material response code to evaluate the impact of radiation penetration. In addition, a band model of unequal widths was developed for the material to investigate the effect of shock-layer radiation within the material. The results showed high internal temperatures and internal decomposition. The tools developed in this work can be useful for accurately modeling heat transfer through the material.
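
    The backtracking model lends itself to a compact illustration. The sketch below integrates a spalled particle's drag-driven motion backward in time from its first observed state toward the ejection site; the uniform gas flow, constant drag coefficient, spherical-particle drag law, and all numeric values are placeholder assumptions rather than the dissertation's blended drag and non-sphericity models.

```python
import numpy as np

RHO_G = 1e-3                      # gas density, kg/m^3 (assumed)
U_GAS = np.array([-800.0, 0.0])   # uniform gas velocity, m/s (assumed)
RHO_P = 1500.0                    # particle density, kg/m^3 (assumed)
DIAM = 50e-6                      # particle diameter, m (assumed)
CD = 1.0                          # constant drag coefficient (placeholder)

def drag_accel(v):
    """Sphere drag: a = 3 C_D rho_g |u - v| (u - v) / (4 rho_p d)."""
    rel = U_GAS - v
    return 0.75 * CD * RHO_G / (RHO_P * DIAM) * np.linalg.norm(rel) * rel

def backtrack(x0, v0, dt=1e-6, steps=200):
    """Explicit Euler reversed in time, from the first experimentally
    observed state back toward the ejection location on the sample."""
    x, v = np.array(x0, float), np.array(v0, float)
    path = [x.copy()]
    for _ in range(steps):
        x -= dt * v               # position one step earlier in time
        v -= dt * drag_accel(v)   # velocity one step earlier in time
        path.append(x.copy())
    return np.array(path)

# First observed position/velocity are illustrative numbers.
path = backtrack(x0=[0.01, 0.002], v0=[-100.0, 20.0])
print(path[-1])                   # estimated state traced back in time
```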

    Modeling of spallation phenomenon in an arc-jet environment

    Get PDF
    Space vehicles experience high heating loads while entering a planetary atmosphere. Ablative materials are commonly used for thermal protection systems; they undergo mass removal mechanisms to counter the high heat rates. Spallation is one of these ablative processes, characterized by the ejection of solid particles from the material into the flow. Numerical codes used in designing heat shields ignore this phenomenon. Hence, to evaluate the effects of spallation, a numerical model is developed to compute the dynamics and chemistry of the ejected particles. The code is one-way coupled to a CFD code that models the high-enthalpy flow field around a lightweight ablative material. A parametric study is carried out to examine the variation in trajectories with respect to ejection parameters. Numerical results are presented for argon and air flow fields, and their effect on particle behavior is studied. The spallation code is also loosely coupled with the CFD code to evaluate the impact of a particle on the flow field, and a numerical study is conducted.
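
    A minimal sketch of the parametric idea, under stated assumptions: forward trajectories of ejected particles through a frozen (one-way coupled) flow field, swept over ejection angle. The near-wall flow profile, sphere drag law, and lumped constant are invented placeholders, not the CFD solution used in the paper.

```python
import numpy as np

K = 0.01   # lumps 0.75 * Cd * rho_g / (rho_p * d), 1/m (assumed)

def gas_velocity(x):
    """Placeholder frozen flow: oncoming gas decelerating toward the wall at x=0."""
    return np.array([-600.0 * min(x[0] / 0.01, 1.0), 0.0])

def fly(speed, angle_deg, dt=1e-6, steps=400):
    """Integrate one ejected particle forward in time; returns its final position."""
    v = speed * np.array([np.cos(np.radians(angle_deg)),
                          np.sin(np.radians(angle_deg))])
    x = np.zeros(2)
    for _ in range(steps):
        rel = gas_velocity(x) - v
        v += dt * K * np.linalg.norm(rel) * rel   # drag pulls v toward the gas velocity
        x += dt * v
    return x

for angle in (15, 45, 75):        # ejection-angle sweep, degrees
    print(angle, fly(speed=50.0, angle_deg=angle))
```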

    Effect of repetitions on static and dynamic strength

    Get PDF
    The objective of this study was to investigate the effect of repetitions on static and dynamic strength. The study is divided into two parts: the first investigated static and dynamic strength during one, three, and six repetitions per minute, and the second analyzed dynamic strength data collected using the Multiaxial Multipurpose Isokinetic Dynamometer at three speeds of one, five, and ten inches per second. Five male subjects participated in the first part of the study, and the results were analyzed by plotting a time series to observe the pattern of change in strength with repetitions. The results show a linear decrease in static and dynamic strength; the rate of decrease was highest for the static six-repetitions-per-minute test and lowest for the dynamic one-repetition-per-minute test. Strength decreased by 48.72% during the static six-repetitions-per-minute routine and by 5.15% during the dynamic one-repetition-per-minute routine. A plot of the median frequency (MDF) of the EMG signals showed that fatigue developed fastest during the static six-repetitions-per-minute routine and slowest during the dynamic one-repetition-per-minute routine. The results of the second part of the study also show a linear decrease in dynamic strength at all three speeds. In the pull cycle, the largest percentage decrease in strength was 15.62% during the one-inch-per-second routine, and the smallest was 8.7% during the ten-inch-per-second routine. During the push cycle, the largest percentage decrease was 18.56% during the five-inch-per-second routine and the smallest was 8.28% during the one-inch-per-second routine. Notably, subjects exerted their maximum strength of 64.89 lb during the five-inch-per-second pull routine; this value was greater than the strength values exerted during the one- and ten-inch-per-second push and pull routines. The MDF plot of the EMG signals showed that fatigue developed fastest during the five-inch-per-second routine and slowest during the one-inch-per-second routine.
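
    For reference, the median frequency used as the fatigue index above is the frequency that divides the EMG power spectrum into two halves of equal power; it shifts downward as a muscle fatigues. The sketch below computes it for a synthetic signal; the sampling rate and test signal are assumptions made for illustration.

```python
import numpy as np

def median_frequency(emg, fs):
    """Frequency splitting the EMG power spectrum into halves of equal power."""
    spectrum = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    cumulative = np.cumsum(spectrum)
    half = cumulative[-1] / 2.0
    return freqs[np.searchsorted(cumulative, half)]

fs = 1000.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
emg = np.sin(2 * np.pi * 80 * t) + 0.5 * rng.standard_normal(t.size)
print(median_frequency(emg, fs))     # ~80 Hz for this synthetic burst
```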

    Evaluation Of Lane Use Management Strategies

    Get PDF
    The limited funding available for roadway capacity expansion and the growing funding gap, in conjunction with increasing congestion, create a critical need for innovative lane use management options. Various cost-effective lane use management strategies have been implemented in the United States and worldwide to address these challenges. However, these strategies have their own costs, operational characteristics, and additional requirements for field deployment. Hence, there is a need for systematic methodologies to evaluate lane use management strategies. In this thesis, a systematic simulation-based methodology is proposed to evaluate lane use management strategies. It involves identifying traffic corridors that are suitable for lane use management strategies and analyzing the strategies in terms of performance and financial feasibility. The state of Indiana is used as a case study, and a set of candidate traffic corridors is identified. From among these, a 10-mile stretch of the I-65 corridor south of downtown Indianapolis is selected as the study corridor using traffic analysis. The demand volumes for the study area are determined using subarea analysis. The ability of the corridor to alleviate congestion is evaluated using microsimulation-based analysis for three strategies: reversible lanes, high occupancy vehicle (HOV) lanes, and ramp metering. Furthermore, an economic evaluation of these strategies is performed to determine the financial feasibility of their implementation. Results from the simulation-based analysis indicate that the reversible lanes and ramp metering strategies improve traffic conditions on the freeway in the major flow direction. Implementation of the HOV lane strategy results in improved traffic flow on the HOV lanes but aggravated congestion on the general purpose lanes. The HOV lane strategy is found to be economically infeasible due to low HOV volume on these lanes. The reversible lane and ramp metering strategies are found to be economically feasible with positive net present values (NPV), with the NPV for the reversible lane strategy being the highest. While reversible lanes, HOV lanes, and ramp metering are effective in mitigating congestion by optimizing lane usage, they do not generate the additional revenue required to reduce the funding deficit. Inadequate funds and worsening congestion have prompted federal, state, and local planning agencies to explore and implement various congestion pricing strategies. In this context, the high occupancy toll (HOT) lanes strategy is explored here. Equity concerns associated with pricing schemes in transportation systems have garnered increased attention in the recent past. Income inequity potentially exists under the HOT strategy, whereby higher-income travelers may reap most of the benefits of HOT lane facilities. An income-based multi-toll pricing approach is proposed for a single HOT lane facility in a network to simultaneously maximize toll revenue and address the income equity concern, while ensuring a minimum level of service on the HOT lanes and that toll prices do not exceed thresholds specified by a regulatory entity. The problem is modeled as a bi-level optimization formulation. The upper-level model seeks to maximize revenue for the tolling authority subject to pre-specified upper bounds on toll prices. The lower-level model solves for the stochastic user equilibrium based on commuters' objective of minimizing their generalized travel costs. Due to the computational intractability of the bi-level formulation, an approximate agent-based solution approach is used to determine the toll prices by considering the tolling authority and commuters as agents. Results from numerical experiments indicate that a multi-toll pricing scheme is more equitable and can yield higher revenues compared to a single-toll-price scheme across all travelers.
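
    The bi-level structure described above can be sketched on a toy network with one HOT lane and one general-purpose (GP) lane: a logit-based stochastic user equilibrium in the lower level and a revenue-maximizing toll search with a level-of-service cap in the upper level. All parameters, the BPR travel-time function, and the single toll class are illustrative assumptions, not the thesis's income-based multi-toll model.

```python
import numpy as np

DEMAND = 4000.0                        # total demand, veh/h (assumed)
CAP = {"hot": 1800.0, "gp": 3600.0}    # lane capacities, veh/h (assumed)
T0 = {"hot": 10.0, "gp": 10.0}         # free-flow travel times, min (assumed)
VOT = 0.5                              # value of time, $/min (assumed)
THETA = 0.8                            # logit dispersion parameter (assumed)

def bpr(t0, flow, cap):
    """BPR link travel time in minutes."""
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

def sue_split(toll, iters=200):
    """Lower level: method-of-successive-averages fixed point for the
    logit stochastic user equilibrium share choosing the HOT lane."""
    p = 0.5
    for n in range(1, iters + 1):
        cost_hot = bpr(T0["hot"], p * DEMAND, CAP["hot"]) + toll / VOT
        cost_gp = bpr(T0["gp"], (1 - p) * DEMAND, CAP["gp"])
        p_aux = 1.0 / (1.0 + np.exp(-THETA * (cost_gp - cost_hot)))
        p += (p_aux - p) / n           # MSA step
    return p

# Upper level: grid search over one toll, keeping HOT travel time under a cap.
best = max(
    (toll * sue_split(toll) * DEMAND, toll)
    for toll in np.arange(0.5, 8.5, 0.5)
    if bpr(T0["hot"], sue_split(toll) * DEMAND, CAP["hot"]) <= 15.0
)
print("max revenue $/h and toll $:", best)
```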

    Biologically Inspired Approaches to Automated Feature Extraction and Target Recognition

    Full text link
    Ongoing research at Boston University has produced computational models of biological vision and learning that embody a growing corpus of scientific data and predictions. Vision models perform long-range grouping and figure/ground segmentation, and memory models create attentionally controlled recognition codes that intrinsically combine bottom-up activation and top-down learned expectations. These two streams of research form the foundation of novel dynamically integrated systems for image understanding. Simulations using multispectral images illustrate road completion across occlusions in a cluttered scene and information fusion from incorrect labels that are simultaneously inconsistent and correct. The CNS Vision and Technology Labs (cns.bu.edu/visionlab and cns.bu.edu/techlab) are further integrating science and technology through analysis, testing, and development of cognitive and neural models for large-scale applications, complemented by software specification and code distribution.
    Air Force Office of Scientific Research (F49620-01-1-0423); National Geospatial-Intelligence Agency (NMA 201-01-1-0216); National Science Foundation (SBE-0354378; BCS-0235298); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency and National Science Foundation support for Siegfried Martens (NMA 501-03-1-2030, DGE-0221680); Department of Homeland Security graduate fellowship

    Management of distal femoral fractures treated with locking compression plate: a prospective study

    Get PDF
    Background: Distal femoral fractures are common and represent 6-8% of all femoral fractures treated by orthopaedic surgeons. Near-anatomical reduction is most important in these fractures to obtain good functional results. A variety of treatment options have been proposed for distal femoral fractures, and treatment with the distal femoral locking compression plate (LCP) has yielded good results. In this study we prospectively studied 30 patients treated with locking compression plates.
    Methods: Thirty patients, both male and female, of different age groups were treated by plating. All patients were followed up prospectively in the orthopaedic department for 12 months, between November 2017 and November 2018. Functional and radiological outcomes were assessed.
    Results: The study included 30 patients, both male and female, in age groups from 20 to above 60 years. Average follow-up was 12 months. Using Neer's scoring system, results were excellent in 60%, good in 26.6%, fair in 6.7%, and poor in 6.7% of cases.
    Conclusions: The locking compression plate, along with active physiotherapy, proved to give good results for distal femoral fractures.

    Screen Space Ambient Occlusion Using Partial Scene Representation

    Get PDF
    Screen space ambient occlusion (SSAO) is a technique in real-time rendering for approximating the amount by which a point on a surface is occluded by surrounding geometry, which helps in adding soft shadows to diffuse objects. Most current methods use the depth buffer as an approximation to scene geometry when sampling the occlusion factor. We introduce a novel technique which uses a partial representation of the scene (here, triangle information in screen space) with compact triangle storage and a ray-marching approach to find a better approximation of the occlusion factor.
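
    A hedged sketch of the ray-cast occlusion idea: estimate ambient occlusion at a shaded point by casting random hemisphere rays against an explicit triangle list. For clarity this uses a full ray/triangle intersection in world space rather than the paper's screen-space marching through compact per-pixel triangle storage; the scene, sample count, and occlusion radius are assumptions.

```python
import numpy as np

def ray_hits_triangle(orig, d, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore ray/triangle test; returns hit distance or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = orig - v0
    u = (s @ p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

def ambient_occlusion(point, normal, triangles, n_rays=64, radius=1.0, seed=0):
    """Fraction of hemisphere rays that escape nearby geometry (1 = open)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_rays):
        d = rng.standard_normal(3)
        d /= np.linalg.norm(d)
        if d @ normal < 0:               # flip into the hemisphere around the normal
            d = -d
        t = min((h for tri in triangles
                 if (h := ray_hits_triangle(point, d, *tri)) is not None),
                default=None)
        if t is not None and t < radius: # nearby geometry occludes this ray
            hits += 1
    return 1.0 - hits / n_rays

# One quad (two triangles) hovering above the shaded point.
quad = [tuple(np.array(v, float) for v in tri) for tri in
        [((-1, 0.5, -1), (1, 0.5, -1), (1, 0.5, 1)),
         ((-1, 0.5, -1), (1, 0.5, 1), (-1, 0.5, 1))]]
# ~0.5: the quad blocks about half the hemisphere above the point.
print(ambient_occlusion(np.zeros(3), np.array([0.0, 1.0, 0.0]), quad))
```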

    Search for Isocurvature with Large-scale Structure: A Forecast for Euclid and MegaMapper using EFTofLSS

    Full text link
    Isocurvature perturbations with a blue power spectrum are one of the natural targets for future large-scale structure observations, which are probing shorter length scales with greater accuracy. We present a Fisher forecast for the Euclid and MegaMapper (MM) experiments in their ability to detect blue isocurvature perturbations. We construct the theoretical predictions in the EFTofLSS and bias expansion formalisms at quartic order in overdensities, which allows us to compute the power spectrum at one-loop order and the bispectrum at tree level, and we further include theoretical error at next-to-leading order in the covariance determination. We find that Euclid is expected to provide at least a factor of a few improvement on the isocurvature spectral amplitude compared to the existing Planck constraints for large spectral indices, while MM is expected to provide about 1 to 1.5 orders of magnitude improvement for a broad range of spectral indices. We find features that are specific to the blue isocurvature scenario, including the leading parametric degeneracy being with the Laplacian bias and a UV-sensitive bare sound speed parameter.
    Comment: v2: 45 pages (32+13), 9 figures, 2 tables, minor corrections (results same as in v1)
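
    The mechanics of such a Fisher forecast can be sketched in a few lines: a toy power spectrum with an added blue isocurvature piece, numerical parameter derivatives, and a Gaussian mode-counting covariance. Everything numeric below is a placeholder, and the sketch omits the one-loop EFTofLSS terms, tree-level bispectrum, and theoretical-error covariance that the paper includes.

```python
import numpy as np

K = np.linspace(0.02, 0.4, 60)   # wavenumber bins, h/Mpc (assumed)
VOL, NBAR = 50e9, 5e-4           # survey volume (Mpc/h)^3, galaxy density (assumed)

def model(theta, k=K):
    """Toy galaxy power spectrum: adiabatic piece plus blue isocurvature piece."""
    a_ad, a_iso, n_iso = theta
    p_ad = a_ad * 2e4 * (k / 0.1) ** -1.5
    p_iso = a_iso * 2e4 * (k / 0.1) ** n_iso
    return p_ad + p_iso

def fisher(theta, h=1e-4):
    p = model(theta)
    dk = K[1] - K[0]
    n_modes = VOL * K ** 2 * dk / (2 * np.pi ** 2)   # modes per k bin
    var = 2.0 * (p + 1.0 / NBAR) ** 2 / n_modes      # Gaussian P(k) variance
    grads = []
    for i in range(theta.size):                      # central differences
        step = np.zeros(theta.size)
        step[i] = h
        grads.append((model(theta + step) - model(theta - step)) / (2 * h))
    g = np.array(grads)
    return (g / var) @ g.T                           # F_ij = sum_k dP_i dP_j / var_k

F = fisher(np.array([1.0, 0.05, 2.0]))               # fiducial values (assumed)
sigma = np.sqrt(np.diag(np.linalg.inv(F)))
print("marginalized 1-sigma on (A_ad, A_iso, n_iso):", sigma)
```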